Structure-Consistent Weakly Supervised Salient Object Detection with Local Saliency Coherence

Authors

Abstract

Sparse labels have been attracting much attention in recent years. However, the performance gap between weakly supervised and fully supervised salient object detection methods is huge, and most previous works adopt complex training schemes with many bells and whistles. In this work, we propose a one-round end-to-end training approach for weakly supervised salient object detection via scribble annotations, without pre/post-processing operations or extra supervision data. Since scribble labels fail to offer detailed salient regions, we propose a local coherence loss to propagate the labels to unlabeled regions based on image features and pixel distance, so as to predict integral and complete object structures. We also design a saliency structure consistency loss as a self-consistent mechanism to ensure that consistent saliency maps are predicted for different scales of the same input, which can be viewed as a regularization technique to enhance the model's generalization ability. Additionally, we design an aggregation module (AGGM) to better integrate high-level features, low-level features and global context information, so that the decoder can aggregate various kinds of information. Extensive experiments show that our method achieves a new state of the art on six benchmarks (e.g., ECSSD dataset: F_β = 0.8995, E_ξ = 0.9079 and MAE = 0.0489), with an average gain of 4.60% in F-measure, 2.05% in E-measure and 1.88% in MAE over the previous best performing method on this task. Source code is available at http://github.com/siyueyu/SCWSSOD.
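
The two loss terms described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released code: it shows a scale-consistency regularizer and a bilateral pairwise coherence term of the kind the abstract outlines. The network handle `net`, the scale factor, and the bandwidths `sigma_rgb` and `sigma_xy` are assumptions made for the example.

    # Minimal sketch (assumption, not the authors' released code): a saliency
    # structure consistency term and a local coherence term in PyTorch.
    import torch
    import torch.nn.functional as F

    def structure_consistency_loss(net, image, scale=0.75):
        # Predict saliency for the same image at two scales and penalize disagreement.
        pred_full = torch.sigmoid(net(image))                      # B x 1 x H x W
        small = F.interpolate(image, scale_factor=scale,
                              mode='bilinear', align_corners=False)
        pred_small = torch.sigmoid(net(small))
        pred_down = F.interpolate(pred_full, size=pred_small.shape[-2:],
                                  mode='bilinear', align_corners=False)
        return F.l1_loss(pred_down, pred_small)

    def local_coherence_loss(pred, image, radius=2, sigma_rgb=0.1, sigma_xy=3.0):
        # Encourage pixels that are close in position and colour to take similar
        # saliency values (a bilateral-kernel pairwise term; image borders are
        # handled crudely via wrap-around for brevity).
        loss, count = pred.new_zeros(()), 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dy == 0 and dx == 0:
                    continue
                img_s = torch.roll(image, shifts=(dy, dx), dims=(2, 3))
                pred_s = torch.roll(pred, shifts=(dy, dx), dims=(2, 3))
                color_d = ((image - img_s) ** 2).sum(dim=1, keepdim=True)
                weight = torch.exp(-color_d / (2 * sigma_rgb ** 2)
                                   - (dy * dy + dx * dx) / (2 * sigma_xy ** 2))
                loss = loss + (weight * (pred - pred_s).abs()).mean()
                count += 1
        return loss / count

In a training loop these terms would typically be added, with small weights, to a partial cross-entropy loss computed only on the scribble-annotated pixels.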

Related articles

Weakly Supervised Learning for Salient Object Detection

Recent advances in supervised salient object detection models demonstrate significant performance on benchmark datasets. Training such models, however, requires expensive pixel-wise annotations of salient objects. Moreover, many existing salient object detection models assume that at least one salient object exists in the input image. Such an impractical assumption leads to less appealing salienc...

Weakly Supervised Top-down Salient Object Detection

Top-down saliency models produce a probability map that peaks at target locations specified by a task/goal such as object detection. They are usually trained in a fully supervised setting involving pixel-level annotations of objects. We propose a weakly supervised top-down saliency framework using only binary labels that indicate the presence/absence of an object in an image. First, the probabi...

Weakly Supervised Salient Object Detection Using Image Labels

Deep learning based salient object detection has recently achieved great success, with its performance greatly outperforming that of any unsupervised method. However, annotating per-pixel saliency masks is a tedious and inefficient procedure. In this paper, we note that superior salient object detection can be obtained by iteratively mining and correcting the labeling ambiguity on saliency maps fro...

Salient Object Detection via Saliency Spread

Salient object detection aims to localize the most attractive objects within an image. For such a goal, accurately determining the saliency values of image regions and keeping the saliency consistency of interested objects are two key challenges. To tackle the issues, we first propose an adaptive combination method of incorporating texture with the dominant color, for enriching the informativen...

Weakly Supervised Object Detection with Posterior Regularization

Motivation: In weakly supervised object detection where only the presence or absence of an object category as a binary label is available for training, the common practice is to model the object location with latent variables and jointly learn them with the object appearance model [1, 5]. An ideal weakly supervised learning method for object detection is expected to guide the latent variables t...


Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i4.16434